Efficient Frameworks for Generalized Low-Rank Matrix Bandit Problems

Neural Information Processing Systems

As a follow-up work, [26] further relaxed the rank-one restriction on the action feature matrices and introduced an algorithm, LowGLOC, based on the online-to-confidence-set conversion [2] for generalized low-rank matrix bandits, with an $O(\sqrt{(d_1+d_2)^3 rT})$ regret bound.


RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection - Supplementary Material - A Experimental Setup

Neural Information Processing Systems

The source code is implemented with PyTorch 1.10.1. We select four sub-sets as the OOD benchmark, namely Protozoa, Microorganisms, Plants, and Mollusks. Table 2 compares the performance against all the post hoc baselines. One of the earliest works considered directly using the Maximum Softmax Probability (MSP) as the scoring function for OOD detection. In [19], the authors observed that the activations of the penultimate layer are quite different for ID and OOD data.
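The MSP baseline mentioned above is simple enough to sketch directly: score each input by the largest softmax probability of its logits, so that confident (peaked) predictions score high and flat, uncertain ones score low. A minimal NumPy illustration, not the paper's implementation; the logit vectors are hypothetical:

```python
import numpy as np

def msp_score(logits):
    """Maximum Softmax Probability: higher scores suggest in-distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)  # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

# A peaked logit vector scores higher than a nearly flat one.
confident = np.array([8.0, 0.5, 0.2])
uncertain = np.array([1.0, 0.9, 1.1])
print(msp_score(confident) > msp_score(uncertain))  # True
```

Inputs whose MSP falls below a chosen threshold would then be flagged as OOD.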


323746f0ae2fbd8b6f500dc2d5c5f898-Paper-Conference.pdf

Neural Information Processing Systems

Hence, in this infinite-width limit, it suffices that the smallest eigenvalue of the NTK is bounded away from 0 for gradient descent to reach zero loss.
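The eigenvalue condition above is easy to check numerically for a concrete kernel Gram matrix: a strictly positive smallest eigenvalue means the kernel is invertible on the training set. A toy NumPy check on a random-feature Gram matrix (an illustrative stand-in, not the paper's NTK):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))        # 20 inputs in R^5
K = X @ X.T + 1e-1 * np.eye(20)         # PSD Gram matrix plus a small ridge
lam_min = np.linalg.eigvalsh(K).min()   # eigvalsh: eigenvalues in ascending order
print(lam_min > 0)                      # True: kernel bounded away from 0
```

In the linearized regime, the convergence rate of gradient descent is governed by this smallest eigenvalue, which is why it must stay bounded away from 0.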





Multi-View Graph Feature Propagation for Privacy Preservation and Feature Sparsity

Harari, Etzion, Unger, Moshe

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) have demonstrated remarkable success in node classification tasks over relational data, yet their effectiveness often depends on the availability of complete node features. In many real-world scenarios, however, feature matrices are highly sparse or contain sensitive information, leading to degraded performance and increased privacy risks. Furthermore, direct exposure of information can result in unintended data leakage, enabling adversaries to infer sensitive information. To address these challenges, we propose a novel Multi-view Feature Propagation (MFP) framework that enhances node classification under feature sparsity while promoting privacy preservation. MFP extends traditional Feature Propagation (FP) by dividing the available features into multiple Gaussian-noised views, each propagating information independently through the graph topology. The aggregated representations yield expressive and robust node embeddings. This framework is novel in two respects: it introduces a mechanism that improves robustness under extreme sparsity, and it provides a principled way to balance utility with privacy. Extensive experiments conducted on graph datasets demonstrate that MFP outperforms state-of-the-art baselines in node classification while substantially reducing privacy leakage. Moreover, our analysis demonstrates that propagated outputs serve as alternative imputations rather than reconstructions of the original features, preserving utility without compromising privacy. A comprehensive sensitivity analysis further confirms the stability and practical applicability of MFP across diverse scenarios. Overall, MFP provides an effective and privacy-aware framework for graph learning in domains characterized by missing or sensitive features.
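The multi-view propagation idea described in this abstract — splitting the features into several Gaussian-noised views, propagating each over the normalized graph topology, then aggregating — might be sketched as follows. This is a simplified NumPy illustration; the number of views, noise scale, and propagation depth are assumptions, not the paper's settings:

```python
import numpy as np

def multi_view_propagate(A, X, n_views=3, noise=0.1, steps=2, seed=0):
    """Propagate several Gaussian-noised copies of X over the graph, then average."""
    rng = np.random.default_rng(seed)
    deg = A.sum(axis=1)                        # assumes no isolated nodes
    A_hat = A / np.sqrt(np.outer(deg, deg))    # symmetric normalization D^-1/2 A D^-1/2
    views = []
    for _ in range(n_views):
        H = X + rng.normal(scale=noise, size=X.shape)  # one Gaussian-noised view
        for _ in range(steps):
            H = A_hat @ H                      # feature propagation step
        views.append(H)
    return np.mean(views, axis=0)              # aggregate views into node embeddings

# Toy graph: a fully connected triangle with self-loops, 4-dimensional features.
A = np.ones((3, 3))
X = np.eye(3, 4)
Z = multi_view_propagate(A, X)
print(Z.shape)  # (3, 4)
```

Because each view only ever sees a noised copy of the features, the aggregated output acts as an alternative imputation rather than a reconstruction of the originals, which is the privacy argument the abstract makes.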


Lighter-X: An Efficient and Plug-and-play Strategy for Graph-based Recommendation through Decoupled Propagation

Zheng, Yanping, Wei, Zhewei, de Hoog, Frank, Chen, Xu, Xu, Hongteng, Ye, Yuhang, Huang, Jiadeng

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) have demonstrated remarkable effectiveness in recommendation systems. However, conventional graph-based recommenders, such as LightGCN, require maintaining embeddings of size $d$ for each node, resulting in a parameter complexity of $\mathcal{O}(n \times d)$, where $n$ represents the total number of users and items. This scaling pattern poses significant challenges for deployment on large-scale graphs encountered in real-world applications. To address this scalability limitation, we propose \textbf{Lighter-X}, an efficient and modular framework that can be seamlessly integrated with existing GNN-based recommender architectures. Our approach substantially reduces both parameter size and computational complexity while preserving the theoretical guarantees and empirical performance of the base models, thereby enabling practical deployment at scale. Specifically, we analyze the original structure and inherent redundancy in their parameters, identifying opportunities for optimization. Based on this insight, we propose an efficient compression scheme for the sparse adjacency structure and high-dimensional embedding matrices, achieving a parameter complexity of $\mathcal{O}(h \times d)$, where $h \ll n$. Furthermore, the model is optimized through a decoupled framework, reducing computational complexity during the training process and enhancing scalability. Extensive experiments demonstrate that Lighter-X achieves comparable performance to baseline models with significantly fewer parameters. In particular, on large-scale interaction graphs with millions of edges, we are able to attain even better results with only 1\% of the parameters of LightGCN.
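The parameter-compression claim above — replacing an $n \times d$ per-node embedding table with $h \ll n$ shared base embeddings — can be illustrated with a toy sharing scheme. The shapes and the id-mod-h assignment rule below are hypothetical illustrations, not the authors' actual compression scheme:

```python
import numpy as np

n, d, h = 1_000_000, 64, 512   # n nodes, d dims, h << n shared base embeddings
base = np.random.default_rng(0).standard_normal((h, d))  # O(h * d) parameters only

def node_embedding(node_ids):
    """Map node ids onto shared base rows via a fixed assignment (hypothetical: id mod h)."""
    return base[np.asarray(node_ids) % h]

emb = node_embedding([0, 512, 7])
print(emb.shape)                     # (3, 64)
print(np.allclose(emb[0], emb[1]))   # True: nodes 0 and 512 share a base row
```

Under any such scheme the parameter count drops from $n \times d$ (64M values here) to $h \times d$ (about 33K), at the cost of nodes sharing rows; learning a good assignment and compressing the adjacency structure is where the actual method's contribution lies.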


C Access to PowerGraph Dataset C.1 Dataset documentation and intended uses

Neural Information Processing Systems

The authors state here that they bear all responsibility in case of violation of rights, etc. We aim to extend PowerGraph with new datasets and include additional power grid analyses, including solutions to the unit commitment problem. Over time, we plan to release new versions of the datasets and provide updates to the results for both the GNN accuracy and the explainability analysis. The authors give free public access to the PowerGraph dataset. We run a hyper-parameter grid search over different GNN models, using torch-geometric 2.3.0.